Spatial frequency divided attention network for ultrasound image segmentation
SHEN Xuewen, WANG Xiaodong, YAO Yu
Journal of Computer Applications    2021, 41 (6): 1828-1835.   DOI: 10.11772/j.issn.1001-9081.2020091470
Aiming at the problems of medical ultrasound images such as numerous noise points, fuzzy boundaries and difficulty in delineating cardiac contours, a new Spatial Frequency Divided Attention Network for ultrasound image segmentation (SFDA-Net) was proposed. Firstly, with the help of Octave convolution, high- and low-frequency components of the image were processed in parallel throughout the network to obtain more diverse information. Then, the Convolutional Block Attention Module (CBAM) was added to pay more attention to effective information during feature recovery, reducing the loss over the whole segmented target area. Finally, Focal Tversky Loss was adopted as the objective function to down-weight simple samples, focus training on difficult samples, and reduce the errors introduced by pixel misjudgment between categories. Multiple sets of comparative experiments show that, with fewer parameters than the original UNet++, SFDA-Net improves segmentation accuracy by 6.2 percentage points and Dice score by 8.76 percentage points, raising mean Pixel Accuracy (mPA) to 84.09% and mean Intersection over Union (mIoU) to 75.79%. SFDA-Net steadily improves network performance while reducing parameters, making echocardiographic segmentation more accurate.
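As an illustration of the Focal Tversky objective mentioned above, a minimal NumPy sketch for binary segmentation follows; the hyperparameter values (alpha, beta, gamma) and the smoothing constant are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, smooth=1e-6):
    """Focal Tversky loss for a binary segmentation map.

    alpha/beta weight false negatives/false positives; gamma < 1 shifts
    the focus toward hard samples. All values here are assumed defaults.
    """
    y_true = y_true.ravel().astype(np.float64)
    y_pred = y_pred.ravel().astype(np.float64)
    tp = np.sum(y_true * y_pred)              # true positives
    fn = np.sum(y_true * (1.0 - y_pred))      # false negatives
    fp = np.sum((1.0 - y_true) * y_pred)      # false positives
    tversky = (tp + smooth) / (tp + alpha * fn + beta * fp + smooth)
    return (1.0 - tversky) ** gamma
```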
Selection of express freight transportation schemes based on rough set over two universes
WANG Xiaorong, ZHANG Yuzhao, ZHANG Zhenjiang
Journal of Computer Applications    2021, 41 (5): 1500-1505.   DOI: 10.11772/j.issn.1001-9081.2020071123
Aiming at the problem of express freight scheme decision under multiple uncertain factors, an express freight scheme decision model and decision rules based on intuitionistic fuzzy rough sets over two universes were proposed. Based on intuitionistic fuzzy rough set theory over two universes, a fuzzy approximate space over two universes for express freight scheme decision was determined. The consumption degrees of transportation indices such as fixed cost, transportation cost, transfer cost, carbon emission and transfer time were treated as intuitionistic fuzzy numbers; the intuitionistic fuzzy relation between evaluation indices and transportation schemes was used to calculate the lower and upper approximation sets; and the maximum intuitionistic index and Hamming closeness degree were introduced to determine the transportation scheme decision rules. Taking an express freight transportation line from Lanzhou to Beijing as an example, the optimal transportation scheme was selected according to the decision rules from 9 schemes combining road, conventional-speed rail and air transport. Sensitivity analysis of transportation cost and transfer cost was performed to verify the accuracy of the results. The two optimal transportation schemes finally selected show the applicability of intuitionistic fuzzy rough sets over two universes to such problems.
Design of FPGA accelerator with high parallelism for convolutional neural network
WANG Xiaofeng, JIANG Penglong, ZHOU Hui, ZHAO Xiongbo
Journal of Computer Applications    2021, 41 (3): 812-819.   DOI: 10.11772/j.issn.1001-9081.2020060996
Most algorithms based on Convolutional Neural Networks (CNNs) are computation-intensive and memory-intensive, making them difficult to apply in embedded fields with low-power requirements such as aerospace, mobile robotics and smartphones. To solve this problem, a Field Programmable Gate Array (FPGA) accelerator with high parallelism for CNNs was proposed. Firstly, four kinds of parallelism in CNN algorithms that can be exploited for FPGA acceleration were compared and studied. Then, a Multi-channel Convolutional Rotating-register Pipeline (MCRP) structure was proposed to exploit the convolution kernel parallelism of CNNs concisely and effectively. Finally, using a strategy of input/output channel parallelism plus convolution kernel parallelism, a highly parallel CNN accelerator architecture was proposed based on the MCRP structure; to verify the rationality of the design, it was deployed on the XILINX XCZU9EG chip. With full use of the on-chip Digital Signal Processor (DSP) resources, the peak computing capacity of the proposed CNN accelerator reached 2304 GOPS (Giga Operations Per Second). Taking the SSD-300 algorithm as the test object, the accelerator achieved an actual computing capacity of 1830.33 GOPS and a hardware utilization rate of 79.44%. Experimental results show that the MCRP structure can effectively improve the computing capacity of a CNN accelerator, and that an MCRP-based accelerator can generally meet the computing requirements of most applications in the embedded fields.
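The reported hardware utilization follows directly from the ratio of actual to peak throughput; a one-line check of the figures in the abstract:

```python
peak_gops = 2304.0      # peak computing capacity reported above
actual_gops = 1830.33   # measured with SSD-300
print(f"utilization = {actual_gops / peak_gops:.2%}")  # -> 79.44%
```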
Citrus disease and insect pest area segmentation based on superpixel fast fuzzy C-means clustering and support vector machine
YUAN Qianqian, DENG Hongmin, WANG Xiaohang
Journal of Computer Applications    2021, 41 (2): 563-570.   DOI: 10.11772/j.issn.1001-9081.2020050645
Focused on the problems that image datasets of citrus diseases and insect pests are scarce, that disease and pest targets are complex and scattered, and that automatic localization and segmentation are difficult, a segmentation method for citrus disease and pest areas based on Superpixel Fast Fuzzy C-means Clustering (SFFCM) and Support Vector Machine (SVM) was proposed. The method makes full use of the speed and robustness of the SFFCM algorithm and its integration of spatial information, while avoiding the manual sample selection required by traditional SVM-based segmentation. Firstly, the improved SFFCM algorithm was used to pre-segment the image into foreground and background regions. Then, morphological erosion and dilation were used to shrink the two regions, and training samples were selected automatically for SVM model training. Finally, the trained SVM classifier was used to segment the entire image. Experimental results show that, compared with three methods, Fast and Robust Fuzzy C-means Clustering (FRFCM), the original SFFCM and Edge Guidance Network (EGNet), the proposed method achieves an average recall of 0.9371, an average precision of 0.9418 and an average accuracy of 0.9303, all better than those of the comparison methods.
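A compact sketch of the automatic sample selection and SVM refinement step described above, using SciPy morphology and scikit-learn; the per-pixel color features and the erosion depth are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from sklearn.svm import SVC

def svm_refine(image, pre_mask, iters=3):
    """Refine an SFFCM-style pre-segmentation with an SVM.

    image: H x W x C float array; pre_mask: H x W boolean foreground mask.
    Confident training pixels are auto-selected by eroding both the
    foreground and the background; 'iters' is an assumed erosion depth.
    """
    pre_mask = pre_mask.astype(bool)
    fg = binary_erosion(pre_mask, iterations=iters)    # confident foreground
    bg = binary_erosion(~pre_mask, iterations=iters)   # confident background
    feats = image.reshape(-1, image.shape[-1]).astype(float)
    X = np.vstack([feats[fg.ravel()], feats[bg.ravel()]])
    y = np.hstack([np.ones(fg.sum()), np.zeros(bg.sum())])
    clf = SVC(kernel="rbf").fit(X, y)                  # train on auto-selected samples
    return clf.predict(feats).reshape(pre_mask.shape).astype(bool)
```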
Relation extraction model via attention-based graph convolutional network
WANG Xiaoxia, QIAN Xuezhong, SONG Wei
Journal of Computer Applications    2021, 41 (2): 350-356.   DOI: 10.11772/j.issn.1001-9081.2020081310
Aiming at the low information utilization of sentence dependency trees and the poor feature extraction in relation extraction tasks, an Attention-guided Gate perceptual Graph Convolutional Network (Att-Gate-GCN) model was proposed. Firstly, a soft pruning strategy based on the attention mechanism assigned weights to the edges of the dependency tree, mining its effective information while filtering out useless information. Secondly, a gate-perceptual Graph Convolutional Network (GCN) structure was constructed: the gating mechanism increased feature perception ability to obtain more robust relation features, and local and non-local dependency features in the dependency tree were combined to further extract key information. Finally, the key information was fed into a classifier to obtain the relation category label. Experimental results indicate that, compared with the original GCN relation extraction model, the proposed model improves the F1 score by 2.2 and 3.8 percentage points on the SemEval2010-Task8 and KBP37 datasets respectively, making full use of effective information and improving the relation extraction ability of the model.
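A minimal PyTorch sketch of one attention-guided, gated graph convolution layer in the spirit of Att-Gate-GCN; the exact score function and gating form are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttGateGCNLayer(nn.Module):
    """One attention-guided, gated graph convolution over a dependency
    tree: attention soft-prunes the edges, a gate filters the update."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim)
        self.att = nn.Linear(2 * dim, 1)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # h: (n, dim) node states; adj: (n, n) 0/1 dependency-tree adjacency
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.att(pair).squeeze(-1).masked_fill(adj == 0, -1e9)
        a = torch.softmax(scores, dim=-1)            # soft-pruned edge weights
        msg = self.w(a @ h)                          # weighted neighbor aggregation
        g = torch.sigmoid(self.gate(torch.cat([msg, h], dim=-1)))
        return g * torch.tanh(msg) + (1 - g) * h     # gated residual update
```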
Analysis of double-channel Chinese sentiment model integrating grammar rules
QIU Ningjia, WANG Xiaoxia, WANG Peng, WANG Yanchun
Journal of Computer Applications    2021, 41 (2): 318-323.   DOI: 10.11772/j.issn.1001-9081.2020050723
Concerning the problem that ignoring grammar rules reduces the accuracy of Chinese text sentiment analysis, a dual-channel Chinese sentiment classification model integrating grammar rules, named CB_Rule (grammar Rules of CNN and Bi-LSTM), was proposed. First, grammar rules were designed to extract information with clearer sentiment tendencies, and semantic features were extracted using the local perception capability of a Convolutional Neural Network (CNN). Then, considering that rule processing may ignore context, a Bi-directional Long Short-Term Memory (Bi-LSTM) network was used to extract global features containing contextual information, which were fused with and supplemented the local features, improving the sentiment tendency information of the CNN channel. Finally, the fused features were fed into a classifier for sentiment tendency judgment, yielding the Chinese sentiment model. The proposed model was compared with R-Bi-LSTM (Bi-LSTM for Chinese sentiment analysis combined with grammar Rules) and SCNN (a travel review sentiment analysis model combining Syntactic rules and CNN) on a Chinese e-commerce review dataset. Experimental results show that the accuracy of the proposed model is increased by 3.7 and 0.6 percentage points respectively, indicating that the proposed CB_Rule model has a good classification effect.
Image super-resolution reconstruction based on deep progressive back-projection attention network
HU Gaopeng, CHEN Ziliu, WANG Xiaoming, ZHANG Kaifang
Journal of Computer Applications    2020, 40 (7): 2077-2083.   DOI: 10.11772/j.issn.1001-9081.2019122155
Focused on problems of Single Image Super-Resolution (SISR) reconstruction such as the loss of high-frequency information during reconstruction, the noise introduced during upsampling, and the difficulty of modeling the interdependence between channels of the feature maps, a deep progressive back-projection attention network was proposed. Firstly, a progressive upsampling method was used to gradually scale the Low-Resolution (LR) image to the given magnification, alleviating problems such as the high-frequency information loss caused by one-step upsampling. Then, at each stage of progressive upsampling, the idea of iterative back-projection was incorporated to learn the mapping between High-Resolution (HR) and LR feature maps and reduce the noise introduced during upsampling. Finally, an attention mechanism was used to dynamically allocate attention to the feature maps generated at different stages of the progressive back-projection network, so that the interdependence between feature maps was learned by the model. Experimental results show that the proposed method increases the Peak Signal-to-Noise Ratio (PSNR) by up to 3.16 dB and the structural similarity by up to 0.2184.
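The iterative back-projection idea used at each upsampling stage can be sketched as a DBPN-style up-projection unit; the layer sizes below are illustrative assumptions, not the authors' configuration.

```python
import torch.nn as nn

class UpProjection(nn.Module):
    """DBPN-style up-projection unit (2x): the residual between the input
    LR features and the re-downsampled HR estimate is projected up again
    to correct the first estimate."""
    def __init__(self, c, k=6, s=2, p=2):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(c, c, k, s, p)
        self.down = nn.Conv2d(c, c, k, s, p)
        self.up2 = nn.ConvTranspose2d(c, c, k, s, p)

    def forward(self, lr):
        hr0 = self.up1(lr)           # first HR estimate
        err = self.down(hr0) - lr    # back-projection residual in LR space
        return hr0 + self.up2(err)   # corrected HR features
```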
Face image inpainting method based on circular fields of feature parts
WANG Xiao, WEI Jiawang, YUAN Yubo
Journal of Computer Applications    2020, 40 (3): 847-853.   DOI: 10.11772/j.issn.1001-9081.2019071212
To solve the unreasonable structure and low efficiency of exemplar-based image inpainting, a face image inpainting method based on circular fields of feature parts was proposed. Firstly, according to the distribution of feature points obtained by landmark localization, the face image was divided into four circular fields to determine feature search domains. Then, in the priority model, the attenuation of the confidence term was changed to an exponential form and combined with a structural gradient term, so that the priority was constrained by local gradient information to improve the structural connectivity of the inpainting result. In the matching-patch search stage, the search domain of the matching patch was determined according to the relative position between the target patch and each circular field, improving search efficiency. Finally, under the structural similarity criterion, inpainting with structural connectivity was completed by choosing the best matching patch. Compared with four state-of-the-art inpainting methods, the proposed method increases the Peak Signal-to-Noise Ratio (PSNR) of inpainted images by 1.219 to 2.663 dB on average and reduces time consumption by 34.7% to 69.6% on average. The experimental results show that the proposed method effectively maintains the structural connectivity and visual rationality of face images, and performs well in both inpainting accuracy and time consumption.
Octave convolution method for lymph node metastases detection
WEI Zhe, WANG Xiaohua
Journal of Computer Applications    2020, 40 (3): 723-727.   DOI: 10.11772/j.issn.1001-9081.2019071315
Focused on the low accuracy and long time cost of manual detection of breast cancer lymph node metastasis, a neural network detection model based on the residual network structure, with its convolution layers designed by the Octave convolution method, was proposed. Firstly, based on the convolution layers of the residual network, the input and output feature vectors of each convolution layer were divided into high-frequency and low-frequency parts, with the channel width and height of the low-frequency part reduced to half of those of the high-frequency part. Then, the convolution from the low-frequency to the high-frequency vector was realized by upsampling the halved low-frequency vector, and the convolution from the high-frequency to the low-frequency vector was realized by average pooling of the high-frequency vector. Finally, the high-frequency output was obtained by adding the high-to-high and low-to-high convolutions, and the low-frequency output by adding the low-to-low and high-to-low convolutions. In this way an Octave convolution layer was constructed, and all convolution layers in the residual network were replaced by Octave convolution layers to build the detection model. In theory, the convolution computation of an Octave convolution layer is reduced by 75%, effectively speeding up model training. Tested on the PCam (PatchCamelyon) dataset on a cloud server with a maximum memory of 13 GB and 4.9 GB of free disk space, the model achieves a recognition accuracy of 95.1%, occupies 8.7 GB of memory and 356.4 MB of disk, and has an average single training time of 4 minutes and 42 seconds. Compared with ResNet50, the model has 0.6% lower accuracy but saves 0.6 GB of memory and 105.9 MB of disk, and shortens the single training time by 1 minute. The experimental results demonstrate that the proposed model has high recognition accuracy, short training time and small memory consumption, reducing the computing resource requirements in the big data era and giving the model application value.
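A minimal PyTorch sketch of the Octave convolution layer described above, with upsampling for the low-to-high path and average pooling for the high-to-low path; the low-frequency channel ratio alpha is an assumed hyperparameter.

```python
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    """Minimal Octave convolution: alpha is the assumed fraction of
    low-frequency channels, kept at half spatial resolution."""
    def __init__(self, cin, cout, alpha=0.5, k=3, p=1):
        super().__init__()
        lin, lout = int(alpha * cin), int(alpha * cout)
        hin, hout = cin - lin, cout - lout
        self.hh = nn.Conv2d(hin, hout, k, padding=p)   # high -> high
        self.hl = nn.Conv2d(hin, lout, k, padding=p)   # high -> low (after pooling)
        self.lh = nn.Conv2d(lin, hout, k, padding=p)   # low -> high (then upsampled)
        self.ll = nn.Conv2d(lin, lout, k, padding=p)   # low -> low

    def forward(self, xh, xl):
        yh = self.hh(xh) + F.interpolate(self.lh(xl), scale_factor=2.0)
        yl = self.ll(xl) + self.hl(F.avg_pool2d(xh, 2))
        return yh, yl
```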
Underwater image super-resolution reconstruction method based on deep learning
CHEN Longbiao, CHEN Yuzhang, WANG Xiaochen, ZOU Peng, HU Xuemin
Journal of Computer Applications    2019, 39 (9): 2738-2743.   DOI: 10.11772/j.issn.1001-9081.2019020353
Due to the characteristics of water itself and the absorption and scattering of light by suspended particles, underwater images suffer from a series of problems such as low Signal-to-Noise Ratio (SNR) and low resolution. Most traditional processing methods, including image enhancement, restoration and reconstruction, rely on a degradation model and suffer from ill-posedness. To further improve the effect and efficiency of underwater image restoration, an improved image super-resolution reconstruction method based on a deep convolutional neural network was proposed. An Improved Dense Block (IDB) structure was introduced into the network, which effectively alleviates the gradient vanishing problem of deep convolutional neural networks while improving training speed. The network was trained on registered pairs of underwater images before and after degradation to obtain the mapping between low-resolution and high-resolution images. Experimental results on a self-built underwater image training set show that, compared with SRCNN (an image Super-Resolution method using Convolutional Neural Networks), the network with IDB improves the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) of reconstructed underwater images by 0.38 dB and 0.013 respectively, and the proposed method can effectively improve the reconstruction quality of underwater images.
Reference | Related Articles | Metrics
Medical image super-resolution reconstruction based on depthwise separable convolution and wide residual network
GAO Yuan, WANG Xiaochen, QIN Pinle, WANG Lifang
Journal of Computer Applications    2019, 39 (9): 2731-2737.   DOI: 10.11772/j.issn.1001-9081.2019030413
To improve the quality of medical image super-resolution reconstruction, a wide residual super-resolution neural network algorithm based on depthwise separable convolution was proposed. Firstly, depthwise separable convolution was used to improve the residual blocks of the network, widening the channels of the convolution layers in each residual block and passing more feature information into the activation function, so that shallow low-level image features were more easily transmitted to higher levels and the quality of medical image super-resolution reconstruction was enhanced. Then, the network was trained with group normalization: the channel dimension of each convolution layer was divided into groups, and the mean and variance for normalization were computed within each group. This made training converge faster, solved the training difficulty caused by the channel widening of the depthwise separable convolutions, and yielded better performance. Experimental results show that, compared with traditional nearest-neighbor interpolation, bicubic interpolation and sparse-representation-based super-resolution, the medical images reconstructed by the proposed algorithm have richer texture detail and more realistic visual effects; compared with super-resolution algorithms based on convolutional neural networks, wide residual networks and generative adversarial networks, the proposed algorithm achieves significant improvements in Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM).
Reference | Related Articles | Metrics
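A minimal PyTorch sketch of the two building blocks described above, a depthwise separable convolution followed by group normalization; channel counts and the number of groups are illustrative assumptions.

```python
import torch.nn as nn

def ds_conv_block(cin, cout, groups=8):
    """Depthwise separable convolution + group normalization (sketch);
    cout must be divisible by 'groups', whose value is an assumption."""
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, padding=1, groups=cin),  # depthwise: one filter per channel
        nn.Conv2d(cin, cout, 1),                        # pointwise: mixes channels
        nn.GroupNorm(groups, cout),                     # per-group mean/variance
        nn.ReLU(inplace=True),
    )
```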
Accelerated KAZE-SIFT feature extraction algorithm for oblique images
BO Dan, LI Zongchun, WANG Xiaonan, QIAO Hanwen
Journal of Computer Applications    2019, 39 (7): 2093-2097.   DOI: 10.11772/j.issn.1001-9081.2018122564
Concerning the poor performance of traditional vertical-image feature extraction algorithms on oblique image matching, a feature extraction algorithm based on Accelerated-KAZE (AKAZE) and Scale-Invariant Feature Transform (SIFT), called AKAZE-SIFT, was proposed. Firstly, to guarantee the accuracy and distinctiveness of feature detection, the AKAZE operator, which fully preserves image contour information, was used for feature detection. Secondly, the robust SIFT operator was used to improve the stability of feature description. Thirdly, rough matching pairs were determined by the Euclidean distance between the target feature vector and candidate feature vectors. Finally, a homography constraint was applied via the random sample consensus algorithm to improve matching purity. To evaluate the performance of the algorithm, blur, rotation, brightness, viewpoint and scale changes under oblique photography were simulated. The experimental results show that, compared with the SIFT and AKAZE algorithms, AKAZE-SIFT improves recall by 12.8% and 5.3%, precision by 6.5% and 6.1%, and F1 measure by 13.8% and 5.6% respectively; its efficiency is higher than that of SIFT and slightly lower than that of AKAZE. With its excellent detection and description performance, the AKAZE-SIFT algorithm is well suited to oblique image feature extraction.
Reference | Related Articles | Metrics
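A short OpenCV sketch of the AKAZE-detect / SIFT-describe combination (assuming OpenCV >= 4.4, where SIFT lives in the main module); matching and RANSAC filtering would follow as described above.

```python
import cv2

def akaze_sift_features(gray):
    """Detect with AKAZE (contour-preserving), describe with SIFT
    (robust 128-D descriptors)."""
    kps = cv2.AKAZE_create().detect(gray, None)       # AKAZE feature detection
    kps, desc = cv2.SIFT_create().compute(gray, kps)  # SIFT feature description
    return kps, desc
```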
Denoising autoencoder based extreme learning machine
LAI Jie, WANG Xiaodan, LI Rui, ZHAO Zhenchong
Journal of Computer Applications    2019, 39 (6): 1619-1625.   DOI: 10.11772/j.issn.1001-9081.2018112246
To solve the problems that random parameter assignment reduces the robustness of the Extreme Learning Machine (ELM) and that its performance is significantly affected by noise, a Denoising AutoEncoder based ELM (DAE-ELM) algorithm combining DAE with ELM was proposed. Firstly, a denoising autoencoder was used to generate the input data, input weights and hidden-layer parameters of the ELM. Then, the hidden-layer output was obtained through the ELM to complete classifier training. On the one hand, the algorithm inherits the advantages of DAE: the automatically extracted features are more representative, more robust and insensitive to noise. On the other hand, the randomness of ELM parameter assignment is overcome and the robustness of the algorithm is improved. Experimental results show that, compared with ELM, Principal Component Analysis ELM (PCA-ELM) and SAA-2, the classification error rate of DAE-ELM decreases by at least 5.6% on MNIST, 3.0% on Fashion-MNIST, 2.0% on Rectangles and 12.7% on Convex.
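The ELM training step itself reduces to a regularized least-squares solve for the output weights; a minimal NumPy sketch follows, where in DAE-ELM the hidden parameters W and b would come from the trained denoising autoencoder rather than random assignment. The ridge term is an assumed stabilizer.

```python
import numpy as np

def elm_output_weights(X, T, W, b, reg=1e-3):
    """Solve the ELM output layer given hidden-layer parameters (W, b).

    H = tanh(XW + b); beta = (H'H + reg*I)^-1 H'T (ridge-regularized).
    """
    H = np.tanh(X @ W + b)   # hidden-layer output
    beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)
    return beta

# prediction: np.tanh(X_new @ W + b) @ beta
```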
Adaptive window regression method for face feature point positioning
WEI Jiawang, WANG Xiao, YUAN Yubo
Journal of Computer Applications    2019, 39 (5): 1459-1465.   DOI: 10.11772/j.issn.1001-9081.2018102057
Focused on the low positioning accuracy of Explicit Shape Regression (ESR) on samples with facial occlusion or exaggerated expressions, an adaptive window regression method was proposed. Firstly, prior information was used to generate an accurate face area box for each image; feature mapping of faces was performed using the center point of the box, and similarity transformations were applied to obtain multiple initial shapes. Secondly, an adaptive window adjustment strategy was given, in which the feature window size was adjusted adaptively according to the mean square error of the previous regression. Finally, based on a Mutual Information (MI) feature selection strategy, a new correlation calculation method was proposed to select the most relevant features from the candidate pixel set. On the three public datasets LFPW, HELEN and COFW, the positioning accuracy of the proposed method is increased by 7.52%, 5.72% and 5.89% respectively compared with the ESR algorithm. The experimental results show that the adaptive window regression method can effectively improve the positioning accuracy of face feature points.
Dense subgraph based telecommunication fraud detection approach in bank
LIU Xiao, WANG Xiaoguo
Journal of Computer Applications    2019, 39 (4): 1214-1219.   DOI: 10.11772/j.issn.1001-9081.2018091861
The lack of labeled telecommunication fraud data accumulated in banks and the high cost of manual labeling leave insufficient labeled data for supervised telecommunication fraud detection. To solve this problem, an unsupervised detection method based on dense subgraphs was proposed. Firstly, fraud accounts were identified by searching for subgraphs with high anomaly degree in the network of accounts and resources (IP addresses and MAC addresses). Then, a subgraph anomaly-degree metric matching the characteristics of telecommunication fraud was designed. Finally, a disk-resident, memory-efficient suspicious-subgraph search algorithm with a theoretical guarantee was proposed. On two synthetic datasets, the F1-scores of the proposed method are 0.921 and 0.861, higher than those of the CrossSpot, fBox and EvilCohort algorithms and very close to those of the M-Zoom algorithm (0.899 and 0.898), while its average running time and peak memory consumption are lower than those of M-Zoom. On a real-world dataset, the F1-score of the proposed method is 0.550, higher than those of fBox and EvilCohort and very close to that of M-Zoom (0.529). Theoretical analysis and simulation results show that the proposed method can be applied effectively to telecommunication fraud detection in banks and is suitable for large datasets in practice.
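As a hedged stand-in for the paper's suspicious-subgraph search (whose exact anomaly metric and disk-resident design are not reproduced here), the classic greedy peeling 2-approximation for the densest subgraph illustrates the core idea of searching by density:

```python
import heapq
from collections import defaultdict

def densest_subgraph(edges):
    """Charikar greedy peeling: repeatedly remove the minimum-degree node
    and keep the densest intermediate node set (density = edges / nodes)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes, m = set(adj), len(edges)
    heap = [(len(adj[u]), u) for u in adj]
    heapq.heapify(heap)
    best_density = m / len(nodes) if nodes else 0.0
    best_set = set(nodes)
    while nodes:
        d, u = heapq.heappop(heap)
        if u not in nodes or d != len(adj[u]):
            continue                      # stale heap entry, skip
        nodes.discard(u)
        m -= d
        for v in adj[u]:
            adj[v].discard(u)
            if v in nodes:
                heapq.heappush(heap, (len(adj[v]), v))
        if nodes and m / len(nodes) > best_density:
            best_density, best_set = m / len(nodes), set(nodes)
    return best_set, best_density
```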
Feature point localization of left ventricular ultrasound image based on convolutional neural network
ZHOU Yujin, WANG Xiaodong, ZHANG Lige, ZHU Kai, YAO Yu
Journal of Computer Applications    2019, 39 (4): 1201-1207.   DOI: 10.11772/j.issn.1001-9081.2018091931
To solve the low feature-point localization accuracy of the traditional cascaded Convolutional Neural Network (CNN) in left ventricular ultrasound images, an improved cascaded CNN combined with region extraction by the Faster Region-based CNN (Faster-RCNN) model was proposed to locate the endocardial and epicardial feature points of the left ventricle. Firstly, the traditional cascaded CNN was improved into a two-stage cascaded structure: in the first stage, an improved convolutional network roughly located the joint endocardial and epicardial feature points; in the second stage, four improved convolutional networks fine-tuned the endocardial and epicardial feature points separately, after which the positions of the joint contour feature points were output. Secondly, the improved cascaded CNN was combined with target region extraction: the target region containing the left ventricle was extracted by the Faster-RCNN model and then fed into the improved cascaded CNN. Finally, the left ventricular contour feature points were located from coarse to fine. Experimental results show that, compared with the traditional cascaded CNN, the proposed method locates left ventricular feature points much more accurately, with predictions closer to the actual values; under the root-mean-square-error evaluation standard, the accuracy of feature point localization is improved by 32.6 percentage points.
Generalization error bound guided discriminative dictionary learning
XU Tao, WANG Xiaoming
Journal of Computer Applications    2019, 39 (4): 940-948.   DOI: 10.11772/j.issn.1001-9081.2018081785
When improving the discriminative ability of a dictionary, max-margin dictionary learning methods ignore that the generalization of a classifier built on the recoded data depends not only on the maximum-margin principle but also on the radius of the Minimum Enclosing Ball (MEB) containing all the data. Aiming at this fact, a Generalization Error Bound Guided discriminative Dictionary Learning (GEBGDL) algorithm was proposed. Firstly, the discriminant condition of the Support Vector Guided Dictionary Learning (SVGDL) algorithm was improved based on the upper-bound theory of the generalization error of the Support Vector Machine (SVM). Then, the SVM large-margin classification principle and the MEB radius were used as constraint terms, maximizing the margin between coding vectors of different classes while minimizing the radius of the MEB containing all coding vectors. Finally, to better account for classifier generalization, the dictionary, coding coefficients and classifiers were updated by an alternating optimization strategy, yielding classifiers with a larger margin between coding vectors and a better-learned dictionary with stronger discriminative ability. Experiments were carried out on the handwritten digit dataset USPS, the face datasets Extended Yale B, AR and ORL, and the object datasets Caltech 101, COIL20 and COIL100 to discuss the influence of hyperparameters and data dimension on recognition rate. The experimental results show that, in most cases, the recognition rate of GEBGDL is higher than those of the Label Consistent K-SVD (LC-KSVD), Locality Constrained and Label Embedding Dictionary Learning (LCLE-DL), Fisher Discriminative Dictionary Learning (FDDL) and SVGDL algorithms, and also higher than those of the Sparse Representation based Classifier (SRC), Collaborative Representation based Classifier (CRC) and SVM.
Krill herd algorithm based on generalized opposition-based learning and its application in data clustering
DING Cheng, WANG Qiuping, WANG Xiaofeng
Journal of Computer Applications    2019, 39 (2): 336-342.   DOI: 10.11772/j.issn.1001-9081.2018061437
To solve the premature convergence caused by decreasing population diversity during the optimization process of the Krill Herd (KH) algorithm, an improved krill herd algorithm based on Generalized Opposition-Based Learning, namely GOBL-KH, was proposed. Firstly, the step-size factor was determined by a cosine decreasing strategy to balance the exploration and exploitation abilities of the algorithm. Then, generalized opposition-based learning was applied to the search of each krill, enhancing its ability to explore the neighborhood space around it. The proposed algorithm was tested on fifteen benchmark functions and compared with the original KH algorithm, KH with Linear Decreasing step (KHLD) and KH with Cosine Decreasing step (KHCD). The experimental results show that the proposed algorithm can effectively avoid premature convergence and has higher solution accuracy. To demonstrate its effectiveness, it was combined with the K-means algorithm to solve the data clustering problem, namely HK-KH: in this fusion algorithm, after each iteration, the worst individual is replaced by the optimal individual or a new individual produced by a K-means iteration. Five UCI datasets were used to test the HK-KH algorithm, and the results were compared with those of K-means, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), KH, KH Clustering Algorithm (KHCA) and Improved KH (IKH) on clustering problems. The experimental results show that the HK-KH algorithm is suitable for data clustering, with strong global convergence and high stability.
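A minimal NumPy sketch of the generalized opposition-based learning step: each krill x is mirrored to k·(a+b) − x within the population's current per-dimension bounds, and the better of the pair would be kept. Drawing k per element is an assumption; variants use a single k.

```python
import numpy as np

def gobl_opposites(pop, rng=None):
    """Generalized opposition-based candidates for a population matrix
    (rows = individuals): x* = k * (a + b) - x, with a, b the current
    per-dimension population bounds and k ~ U(0, 1)."""
    rng = rng or np.random.default_rng()
    a, b = pop.min(axis=0), pop.max(axis=0)
    k = rng.random(pop.shape)   # per-element k is an assumption
    return k * (a + b) - pop
```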
Selective encryption scheme based on Logistic and Arnold transform in high efficiency video coding
ZHOU Yizhao, WANG Xiaodong, ZHANG Lianjun, LAN Qiongqiong
Journal of Computer Applications    2019, 39 (10): 2973-2979.   DOI: 10.11772/j.issn.1001-9081.2019040742
To effectively protect video information, a scheme combining transform-coefficient scrambling with syntax-element encryption was proposed according to the characteristics of H.265/HEVC (High Efficiency Video Coding). For Transform Units (TUs), 4×4 TUs were scrambled by the Arnold transform. Meanwhile, a shift cipher was designed and initialized according to the approximate distribution of the Direct Current (DC) coefficients, and the DC coefficients of 8×8, 16×16 and 32×32 TUs were shift-encrypted using an encryption map generated by the Arnold transform. Some syntax elements that are bypass-coded during entropy coding were encrypted with a Logistic chaotic sequence. After encryption, the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) of the video decrease by 26.1 dB and 0.51 on average respectively, while the compression ratio decreases by only 1.126% and the coding time increases by only 0.17%. Experimental results show that, while ensuring a good encryption effect and little impact on bit rate, the proposed scheme incurs little extra coding overhead and is suitable for real-time video applications.
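A small NumPy sketch of the Arnold (cat) map used above to scramble 4×4 TU coefficient blocks; the classic (x, y) → (x + y, x + 2y) mod N form is assumed.

```python
import numpy as np

def arnold_scramble(block, times=1):
    """Arnold (cat) map scrambling of an N x N block:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied 'times' times."""
    n = block.shape[0]
    out = block.copy()
    for _ in range(times):
        scr = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scr[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scr
    return out
```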
Binocular camera multi-pose calibration method based on radial alignment constraint algorithm
YANG Shangkun, WANG Yansong, GUO Hui, WANG Xiaolan, LIU Ningning
Journal of Computer Applications    2018, 38 (9): 2655-2659.   DOI: 10.11772/j.issn.1001-9081.2018020503
In binocular stereo vision, the camera must be calibrated to obtain its internal and external parameters for 3D measurement or precise positioning of objects. Through a study of the camera model with first-order radial distortion, linear formulas for solving the internal and external parameters were constructed based on the Radial Alignment Constraint (RAC) calibration method. The inclination angle, rotation angle, pitch angle and main distortion elements of the lens were all taken into consideration, remedying the defects of the traditional RAC calibration method, which considers only radial distortion and requires prior values for some parameters. A 3D reconstruction experiment with a multi-pose binocular camera was carried out using the obtained internal and external parameters. The experimental results show that the reprojection error of this calibration method is distributed in [-0.3, 0.3], and the similarity between the measured trajectory and the actual trajectory is 96%, which helps reduce the error rate of binocular stereo vision 3D measurement.
Data governance collaborative method based on blockchain
SONG Jundian, DAI Bingrong, JIANG Liwen, ZHAO Yao, LI Chao, WANG Xiaoqiang
Journal of Computer Applications    2018, 38 (9): 2500-2506.   DOI: 10.11772/j.issn.1001-9081.2018030594
To solve the problems of inconsistent data standards, uneven data quality, and compromised data security and privacy in current data governance, a blockchain-based data governance collaboration method integrating the multi-party cooperation, security and reliability of blockchain was proposed and applied to the construction of data standards, the protection of data security, and the control of data sharing processes. Based on data governance requirements and blockchain characteristics, a collaborative data governance model was formed, and a multi-party collaborative data standard process, an efficient mechanism for constructing and updating data standards, and secure data sharing and access control were developed, so that the method improves the efficiency and security of data standardization work. The experimental and analysis results show that the proposed method significantly improves the time efficiency of applying standard terms compared with traditional data standard construction methods; especially in a big data environment, the use of smart contracts improves time efficiency. The distributed storage of the blockchain provides a powerful basis and guarantee for system security and for tracing and auditing user behavior. The method provides a good application demonstration for data governance and a reference for the industry's metadata management and the sharing and application of data standards.
Compression method based on bit extraction of independent rule sets for packet classification
WANG Xiaolong, LIU Qinrang, LIN Senjie, HUANG Yajing
Journal of Computer Applications    2018, 38 (8): 2375-2380.   DOI: 10.11772/j.issn.1001-9081.2018010069
The continuously expanding scale of multi-field entries and their growing bit-width put heavy storage pressure on Internet hardware. To solve this problem, a compression method based on Bit Extraction of Independent rule Subsets (BEIS) was proposed. Firstly, some fields were merged based on the logical relationships among the match fields, reducing the number of match fields and the width of flow tables. Secondly, after dividing the merged rule set into independent rule subsets, distinguishing bits were extracted from the subsets to implement the matching and lookup function, further reducing the Ternary Content Addressable Memory (TCAM) space used. Finally, a lookup hardware architecture for the method was put forward. Simulation results show that, at the cost of a certain amount of time complexity, the proposed method reduces the storage space of OpenFlow flow tables by 20% compared with Field Trimmer (FT); moreover, for common packet classification rule sets such as access control lists and firewalls in practical applications, a compression ratio of 20%-40% can be achieved.
Cyanobacterial bloom forecast method based on genetic algorithm-first order lag filter and long short-term memory network
YU Jiabin, SHANG Fangfang, WANG Xiaoyi, XU Jiping, WANG Li, ZHANG Huiyan, ZHENG Lei
Journal of Computer Applications    2018, 38 (7): 2119-2123.   DOI: 10.11772/j.issn.1001-9081.2017122959
The evolution of algal blooms in rivers and lakes is sudden and uncertain, which leads to low bloom prediction accuracy. To solve this problem, chlorophyll-a concentration was used as the surface index of the cyanobacteria bloom evolution process, and a cyanobacterial bloom forecast model based on the Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) was proposed. Firstly, an improved Genetic algorithm-First order lag filter (GF) optimization algorithm was used to smooth and filter the data. Secondly, a GF-LSTM network model was built to forecast cyanobacterial blooms accurately. Finally, the model was tested on data sampled from Meiliang Lake in the Taihu area and compared with the traditional RNN and LSTM networks. The experimental results show that the mean relative error of the proposed GF-LSTM model is 16%-18%, lower than those of the RNN model (28%-32%) and the LSTM network (19%-22%); the model smooths and filters the data well, has higher prediction accuracy and better adaptability to the samples, and avoids the widely known gradient vanishing and gradient exploding problems of traditional RNNs during long-term training.
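The first-order lag filter at the core of the GF smoothing stage is a one-line recurrence; a minimal sketch follows, where in GF-LSTM the coefficient would be tuned by the genetic algorithm rather than fixed.

```python
def first_order_lag(xs, alpha):
    """First-order lag filter: y_t = alpha * y_(t-1) + (1 - alpha) * x_t.

    'alpha' in [0, 1) controls the smoothing strength; in GF-LSTM it is
    optimized by the genetic algorithm.
    """
    ys, y = [], xs[0]
    for x in xs:
        y = alpha * y + (1 - alpha) * x
        ys.append(y)
    return ys
```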
Differential privacy budget allocation method for data of tree index
WANG Xiaohan, HAN Huihui, ZHANG Zepei, YU Qingying, ZHENG Xiaoyao
Journal of Computer Applications    2018, 38 (7): 1960-1966.   DOI: 10.11772/j.issn.1001-9081.2018010014
Differential privacy protection for spatial data with a tree index requires adding noise. Most existing methods allocate the privacy budget uniformly, leaving ordinary users no personalized choice. To solve this problem, an arithmetic-sequence and a geometric-sequence privacy budget allocation method were proposed. Firstly, the spatial data were indexed by a tree structure. Secondly, users could personalize the difference or ratio between the privacy budgets assigned to two adjacent layers, dynamically adjusting the budget according to their needs for privacy protection and query accuracy. Finally, the privacy budget was allocated to each layer of the tree, realizing personalized, on-demand allocation. Theoretical analysis and experimental results show that the two methods are more flexible in privacy budget allocation than uniform allocation, and that the geometric-sequence method outperforms the arithmetic-sequence method.
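A minimal sketch of the geometric-sequence allocation: the per-level budgets form a geometric series whose sum equals the total budget under sequential composition, with the user-chosen ratio playing the personalization role described above.

```python
def geometric_budgets(eps_total, depth, ratio):
    """Per-level privacy budgets forming a geometric sequence that sums
    to eps_total (sequential composition over tree levels); 'ratio' is
    the user-chosen quotient between adjacent levels."""
    if ratio == 1:
        return [eps_total / depth] * depth
    eps1 = eps_total * (1 - ratio) / (1 - ratio ** depth)
    return [eps1 * ratio ** i for i in range(depth)]

# e.g. geometric_budgets(1.0, 4, 0.5) -> [0.5333..., 0.2667..., 0.1333..., 0.0667...]
```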
Distribution analysis method of industrial waste gas for non-detection zone based on bi-directional error multi-layer neural network
WANG Liwei, WANG Xiaoyi, WANG Li, BAI Yuting, LU Yutian
Journal of Computer Applications    2018, 38 (5): 1500-1504.   DOI: 10.11772/j.issn.1001-9081.2017102606
Industrial waste gas accounts for about 70% of atmospheric pollution sources, so it is crucial to establish a full-scale and reasonable monitoring mechanism. However, the monitoring area is large, monitoring devices cannot be set up in some special areas, and it is difficult to model the gas distribution realistically. To solve these practical and theoretical problems, an analysis method of industrial waste gas distribution for non-detection zones was proposed based on a Bi-directional Error Multi-Layer Neural Network (BEMNN). Firstly, a monitoring mechanism of "monitoring at the boundary, inferring in the dead zone" was introduced to offset the lack of monitoring points in some areas. Secondly, a multi-layer combination neural network in which errors propagate bi-directionally was proposed to model the gas distribution relationship between the boundary and the dead zone, so that the gas distribution in the dead zone could be predicted from boundary monitoring data. Finally, an experiment was conducted on actual monitoring data from an industrial park: the mean absolute error was less than 28.83 μg, the root-mean-square error was less than 45.62 μg, and the relative error was between 8% and 8.88%. The results prove the feasibility of the proposed method, whose accuracy meets practical requirements.
Fast intra mode prediction decision and coding unit partition algorithm based on high efficiency video coding
GUO Lei, WANG Xiaodong, XU Bowen, WANG Jian
Journal of Computer Applications    2018, 38 (4): 1157-1163.   DOI: 10.11772/j.issn.1001-9081.2017092302
Due to the high complexity of intra coding in High Efficiency Video Coding (HEVC), an efficient intra coding algorithm combining coding unit partition and intra mode selection based on texture features was proposed. The strength of the dominant direction at each depth layer was used both to decide whether a Coding Unit (CU) needs further partition and to reduce the number of candidate intra modes. Firstly, the variance of pixels in the coding unit and the strength of the dominant direction based on pixel units were calculated to determine the texture direction complexity, and the final depth was derived by a threshold strategy. Secondly, the relation between vertical and horizontal complexity and the probability of each intra mode being selected were used to choose a subset of prediction modes, further reducing encoding complexity. Compared with HM15.0, the proposed algorithm saves 51.997% of encoding time on average, while the Bjontegaard Delta Peak Signal-to-Noise Rate (BDPSNR) decreases by only 0.059 dB and the Bjontegaard Delta Bit Rate (BDBR) increases by only 1.018%. The experimental results show that the method reduces encoding complexity with negligible rate-distortion performance loss, which benefits real-time video applications of the HEVC standard.
Multi-scale network replication technology for fusion of virtualization and digital simulation
WU Wenyan, JIANG Xin, WANG Xiaofeng, LIU Yuan
Journal of Computer Applications    2018, 38 (3): 746-752.   DOI: 10.11772/j.issn.1001-9081.2017081956
Network replication technology has become the cornerstone of evaluation platforms for network security experiments and of network emulation systems. Facing the fidelity and scalability requirements of network replication, a multi-scale network replication technology based on a cloud platform, fusing lightweight virtualization, full virtualization and digital simulation, was proposed. The architecture for the seamless fusion of these three scales was introduced first, and then the network construction technology based on this architecture was studied. Emulation results show that networks built with this construction technology are flexible, transparent and concurrent, and that the technology can emulate networks with high extensibility. Finally, communication tests for a variety of protocols and simple network security experiments on a large-scale emulation network verified its availability. The extensive experimental results show that the multi-scale network replication technology fusing virtualization and digital simulation can powerfully support the creation of large-scale emulation networks.
Firefly algorithm based on uniform local search and variable step size
WANG Xiaojing, PENG Hu, DENG Changshou, HUANG Haiyan, ZHANG Yan, TAN Xujie
Journal of Computer Applications    2018, 38 (3): 715-721.   DOI: 10.11772/j.issn.1001-9081.2017082039
Since the Firefly Algorithm (FA) converges slowly and with low solution accuracy, an improved Firefly Algorithm with Uniform local search and Variable step size (UVFA) was proposed. Firstly, uniform local search, built on uniform design theory, was established to accelerate convergence and enhance exploitation ability. Secondly, the search step size was tuned dynamically by a variable step size strategy to balance exploration and exploitation. Finally, the uniform local search and the variable step size were fused. Simulation results on twelve benchmark functions show that the mean objective value of UVFA is significantly better than those of FA, WSSFA (Wise Step Strategy for Firefly Algorithm), VSSFA (Variable Step Size Firefly Algorithm) and the Uniform local search Firefly Algorithm (UFA), and that the time complexity is clearly reduced. UVFA works well on both low- and high-dimensional problems and has good robustness.
Transfer learning based hierarchical attention neural network for sentiment analysis
QU Zhaowei, WANG Yuan, WANG Xiaoru
Journal of Computer Applications    2018, 38 (11): 3053-3056.   DOI: 10.11772/j.issn.1001-9081.2018041363
The purpose of document-level sentiment analysis is to predict the sentiment users express in a document. Traditional neural network-based methods rely on unsupervised word vectors, which cannot exactly represent contextual relationships or understand context; moreover, the Recurrent Neural Network (RNN) generally used for sentiment analysis has a complex structure and numerous parameters. To address these issues, a Transfer Learning based Hierarchical Attention Neural Network (TLHANN) was proposed. Firstly, an encoder was trained on a machine translation task to understand context and generate hidden vectors. Then, the encoder was transferred to the sentiment analysis task by concatenating its hidden vectors with the corresponding unsupervised word vectors, giving a distributed representation that better captures contextual relationships. Finally, a two-level hierarchical network was applied to the sentiment analysis task, with a simplified RNN unit, the Minimal Gate Unit (MGU), at each level, leading to fewer parameters; the attention mechanism was used to extract important information. The experimental results show that the accuracy of the proposed algorithm is increased by an average of 8.7% and 23.4% compared with the traditional neural network algorithm and the Support Vector Machine (SVM) respectively.
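A minimal PyTorch sketch of the MGU cell, the simplified RNN unit used at each level; this follows the published MGU formulation (a single forget gate replacing the GRU's two gates), with layer sizes as assumptions.

```python
import torch
import torch.nn as nn

class MGUCell(nn.Module):
    """Minimal Gate Unit: a GRU with reset and update gates merged into a
    single forget gate f, hence fewer parameters."""
    def __init__(self, nin, nh):
        super().__init__()
        self.f = nn.Linear(nin + nh, nh)   # forget gate
        self.c = nn.Linear(nin + nh, nh)   # candidate state

    def forward(self, x, h):
        f = torch.sigmoid(self.f(torch.cat([x, h], dim=-1)))
        h_cand = torch.tanh(self.c(torch.cat([x, f * h], dim=-1)))
        return (1 - f) * h + f * h_cand
```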
New traffic classification method for imbalanced network data
YAN Binghao, HAN Guodong, HUANG Yajing, WANG Xiaolong
Journal of Computer Applications    2018, 38 (1): 20-25.   DOI: 10.11772/j.issn.1001-9081.2017071812
To solve the imbalance problem in traffic classification that Peer-to-Peer (P2P) traffic far exceeds non-P2P traffic, a new traffic classification method for imbalanced network data was presented. By introducing and improving the Synthetic Minority Over-sampling Technique (SMOTE), a Mean SMOTE (M-SMOTE) algorithm was proposed to balance the traffic data. On this basis, three machine learning classifiers, Random Forest (RF), Support Vector Machine (SVM) and Back-Propagation Neural Network (BPNN), were used to identify the traffic types. Theoretical analysis and simulation results show that, compared with the imbalanced state, the SMOTE algorithm improves the recognition accuracy of non-P2P traffic by 16.5 percentage points and the overall recognition rate of network traffic by 9.5 percentage points; the M-SMOTE algorithm further improves these by 3.2 and 2.6 percentage points respectively. The experimental results show that balancing the imbalanced data effectively solves the low non-P2P recognition rate caused by excessive P2P traffic, and that M-SMOTE achieves higher recognition accuracy than SMOTE.
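The abstract does not spell out the M-SMOTE rule, so the sketch below takes one plausible reading, each synthetic minority sample is the mean of a random minority point and its k nearest minority neighbors; this is an assumption for illustration, not the paper's definition.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def m_smote(X_min, n_new, k=5, seed=0):
    """Mean-based SMOTE oversampling (one plausible reading of M-SMOTE):
    each synthetic sample averages a random minority point with its
    k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nbrs.kneighbors(X_min)           # idx[:, 0] is the point itself
    picks = rng.integers(0, len(X_min), size=n_new)
    return np.array([X_min[idx[i]].mean(axis=0) for i in picks])
```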